974 research outputs found

    Estimating the Structure of the Payment Network in the LVTS: An Application of Estimating Communities in Network Data

    In the Canadian large-value payment system, an important goal is to understand how liquidity is transferred through the system and hence how efficiently the system settles payments. Understanding the structure of the underlying network of relationships between participants in the payment system is a crucial step towards this goal. The set of nodes in any given network can be partitioned into a number of groups (or "communities"). Usually the partition is not directly observable and must be inferred from the observed data on interaction flows between nodes. In this paper we use the statistical model of Copic, Jackson, and Kirman (2007) to estimate the most likely partition in the network of business relationships in the LVTS. Specifically, we estimate from the LVTS transactions data the different "communities" formed by the direct participants in the system. Using various measures of transaction intensity, we uncover communities of participants based on both transaction amounts and physical locations. More importantly, these communities were not easily discernible in previous studies of LVTS data, since those studies did not take into account the network (or transitive) aspects of the data.
    Keywords: payment, clearing, and settlement systems; financial stability
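As a toy illustration of the inference step (not the Copic, Jackson, and Kirman likelihood itself, and with invented flow values rather than LVTS data), one can search over two-block partitions of a small weighted flow network for the split that keeps the largest share of flow inside the blocks:

```python
from itertools import combinations

# Invented flows between four hypothetical participants (not LVTS data).
flows = {
    ("A", "B"): 90, ("B", "A"): 80,
    ("C", "D"): 70, ("D", "C"): 60,
    ("A", "C"): 5,  ("B", "D"): 5,
}
nodes = ["A", "B", "C", "D"]
total = sum(flows.values())

def within_share(group):
    """Share of total flow that does not cross the two-block partition."""
    return sum(w for (i, j), w in flows.items()
               if (i in group) == (j in group)) / total

# Brute-force search over all non-trivial bipartitions.
best = max((frozenset(g) for r in range(1, len(nodes))
            for g in combinations(nodes, r)),
           key=within_share)
```

Real community-detection models replace the within-flow share with a statistical likelihood and avoid the exponential enumeration, but the inference target, the most likely partition given observed flows, is the same.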

    Graded-index fiber collimator enhanced extrinsic Fabry-Perot interferometer

    In this thesis, we studied a fringe-visibility-enhanced extrinsic Fabry-Perot interferometer (EFPI) made by fusion splicing a quarter-pitch graded-index fiber (GIF) to the lead-in single-mode fiber (SMF). The performance of the GIF collimator is theoretically analyzed using a ray-matrix model and experimentally verified through beam divergence angle measurements. The fringe visibility of the GIF-collimated EFPI is measured as a function of cavity length and compared with that of a regular SMF-EFPI. At a cavity length of 500 µm, the fringe visibility of the GIF-EFPI is 0.8, while that of the SMF-EFPI is only 0.2. The visibility-enhanced GIF-EFPI provides a better signal-to-noise ratio (SNR) for applications where a large dynamic range is desired --Abstract, page iii
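The visibility comparison follows the standard two-beam relation V = 2*sqrt(I1*I2)/(I1+I2): collimation keeps the power returned by the far reflector comparable to the near-reflector power, while an uncollimated beam diverges over the cavity and returns far less. A minimal sketch with hypothetical returned-power ratios (not the thesis's measured values):

```python
import math

def visibility(i1, i2):
    """Two-beam interference fringe visibility: V = 2*sqrt(I1*I2)/(I1+I2)."""
    return 2 * math.sqrt(i1 * i2) / (i1 + i2)

# Hypothetical power ratios at a long cavity (illustrative only):
v_gif = visibility(1.0, 0.45)   # collimated beam, little divergence loss
v_smf = visibility(1.0, 0.01)   # bare SMF, strong divergence loss
```

Matched beam powers (i1 == i2) give unit visibility, and visibility degrades as the second beam's returned power falls, which is why collimating the cavity beam preserves fringe contrast at long cavity lengths.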

    Fiber inline pressure and acoustic sensor fabricated with femtosecond laser

    Pressure and acoustic measurements are required in many industrial applications such as down-hole oil well monitoring, structural health monitoring, engine monitoring, and the study of aerodynamics. Conventional sensors are difficult to apply in such environments due to high temperature, electromagnetic-interference noise, and limited space. Fiber optic sensors have been developed since the last century and have proved to be good candidates for such harsh environments. This dissertation aims to design, develop and demonstrate miniaturized fiber pressure/acoustic sensors for harsh-environment applications through femtosecond laser fabrication. Working towards this objective, the dissertation explored two types of fiber inline microsensors fabricated by femtosecond laser: an extrinsic Fabry-Perot interferometric (EFPI) sensor with a silica diaphragm for pressure/acoustic sensing, and an intrinsic Fabry-Perot interferometer (IFPI) for temperature sensing. The scope of the work consists of device design, device modeling/simulation, laser fabrication system setup, signal-processing method development, and sensor performance evaluation and demonstration. This research provides theoretical and experimental evidence that femtosecond laser fabrication is a valid technique for making miniaturized fiber optic pressure and temperature sensors with advantages over currently developed sensors --Abstract, page iii
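For a diaphragm-based EFPI, the pressure response can be estimated from thin-plate theory: a clamped circular diaphragm of radius a and thickness h deflects at its centre by w0 = 3*(1 - nu^2)*P*a^4 / (16*E*h^3), which changes the cavity length and shifts the interference spectrum. A sketch with fused-silica constants and a hypothetical geometry (not the dissertation's actual dimensions):

```python
def center_deflection(p, a, h, e_mod, nu):
    """Centre deflection (m) of a clamped circular plate under uniform
    pressure p: w0 = 3*(1 - nu^2)*p*a^4 / (16*E*h^3), small-deflection theory."""
    return 3 * (1 - nu**2) * p * a**4 / (16 * e_mod * h**3)

# Fused-silica constants; the geometry below is a hypothetical example.
E_SILICA, NU_SILICA = 73e9, 0.17         # Young's modulus (Pa), Poisson ratio
w0 = center_deflection(p=1e5,            # 1 bar of applied pressure
                       a=62.5e-6,        # diaphragm radius ~ fiber radius, m
                       h=2e-6,           # diaphragm thickness, m
                       e_mod=E_SILICA, nu=NU_SILICA)
```

The strong a^4/h^3 scaling is what makes femtosecond-laser control of the diaphragm geometry matter: small thickness changes move the sensitivity by large factors.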

    Interlocking structure design and assembly

    Many objects in our life are not manufactured as whole rigid pieces. Instead, smaller components are made to be later assembled into larger structures. Chairs are assembled from wooden pieces, cabins are made of logs, and buildings are constructed from bricks. These components are commonly designed by many iterations of human thinking. In this report, we look at a few problems related to interlocking component design and assembly. Given an atomic object, how can we design a package that holds the object firmly without a gap in between? How many pieces should the package be partitioned into? How can we assemble or extract each piece? We attack this problem by first looking at the lower bound on the number of pieces, then at the upper bound. Afterwards, we propose a practical algorithm for designing these packages. We also explore a special kind of interlocking structure that has only one or a small number of movable pieces, such as a burr puzzle. We design a few blocks with joints whose combinations can be assembled into almost any voxelized 3D model. Our blocks require very simple motions to be assembled, enabling robotic assembly. As a proof of concept, we also develop a robot system to assemble the blocks. In some extreme settings where the construction components are small, controlling each component individually is impossible. We discuss an option using global controls, which can come from gravity or magnetic fields. We show that in some special cases where the small units form a rectangular matrix, rearrangement can be done in a small space using a technique similar to the bubble sort algorithm
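The final rearrangement claim leans on an adjacent-swap argument of the bubble-sort kind: any permutation of units in a row can be reached by repeatedly exchanging neighbours, using only constant extra space. A minimal sketch of that idea (a plain bubble sort with a swap counter, not the report's actual global-control procedure):

```python
def adjacent_swap_sort(row):
    """Sort by adjacent swaps only, as in bubble sort; returns (sorted, swaps).
    Each swap models moving one unit past a neighbour in O(1) extra space."""
    row = list(row)
    swaps = 0
    for end in range(len(row) - 1, 0, -1):
        for i in range(end):
            if row[i] > row[i + 1]:
                row[i], row[i + 1] = row[i + 1], row[i]
                swaps += 1
    return row, swaps
```

The swap count is the number of inversions in the input, which bounds how many neighbour exchanges any such rearrangement scheme must perform.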

    p-adic Verification of Class Number Computations

    The aim of this thesis is to determine whether it is possible, using p-adic techniques, to unconditionally evaluate the p-valuation of the class number h of an algebraic number field K. This is important in many areas of number theory, especially Iwasawa theory. The class group ClK of an algebraic number field K is the group of fractional ideals of K modulo principal ideals. Its cardinality (the class number h) is directly linked to the existence of unique factorisation in K, and hence the class group is of core importance to almost all multiplicative problems concerning number fields. The explicit computation of ClK (and h) is a fundamental task in computational number theory. Despite its importance, existing algorithms cannot obtain the class group unconditionally in a reasonable amount of time if the field has a large discriminant. Although faster, specialised algorithms (focused only on calculating the p-valuation) are limited in the cases they can handle. We present two algorithms to verify the p-valuation of h for any totally real abelian number field, with no restrictions on p. Both algorithms are based on the p-adic class number formula and work by computing p-adic L-functions Lp(s,χ) at s = 1. They arise from two different ways of computing Lp(1,χ), using either a closed formula or a convergent series formula. We prove that our algorithms compare favourably against existing class group algorithms, with superior complexity for number fields of degree 5 or higher. We also demonstrate that our algorithms are faster in practice. Finally, we present some open questions arising from the algorithms
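As a small, unconditional illustration of class number computation in a setting much simpler than the thesis's (imaginary quadratic fields rather than totally real abelian ones, and counting rather than p-adic L-functions), h can be obtained by counting reduced binary quadratic forms, after which the p-valuation is a trivial loop:

```python
from math import isqrt

def class_number(D):
    """Class number of discriminant D < 0, by counting reduced forms
    a*x^2 + b*x*y + c*y^2: b^2 - 4ac = D, |b| <= a <= c,
    and b >= 0 whenever |b| == a or a == c."""
    assert D < 0 and D % 4 in (0, 1)
    h = 0
    for a in range(1, isqrt(-D // 3) + 1):   # reduced forms have a <= sqrt(-D/3)
        for b in range(-a, a + 1):
            if (b * b - D) % (4 * a):
                continue
            c = (b * b - D) // (4 * a)
            if c < a:
                continue
            if b < 0 and (a == c or a == -b):
                continue                      # reduced forms require b >= 0 here
            h += 1
    return h

def vp(n, p):
    """p-adic valuation of a positive integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v
```

For example, vp(class_number(-23), 3) confirms that 3 divides h(-23) exactly once. The thesis's algorithms replace the counting step by evaluating Lp(1,χ) and reading off v_p(h) from the p-adic class number formula.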

    Experimental and Theoretical Investigation on the Initiation Mechanism of Low-Rank Coal's Self-Heating Process

    Coal spontaneous combustion remains a safety concern during coal mining, transportation, and storage, and requires further investigation to develop effective prevention and control strategies. While various intrinsic and extrinsic conditions may influence coal self-heating, low-rank coals are more prone to spontaneous combustion, especially in humid climates. Quantifying the effect of extrinsic moisture on the self-heating of low-rank coals is therefore critical for understanding their high propensity for spontaneous combustion. Many studies have qualitatively evaluated the role of extrinsic moisture in coal self-heating; few, however, have quantified its contribution to wetting and oxidation heat and their ultimate impact on coal's tendency for spontaneous combustion. This dissertation develops experimental and mathematical strategies for quantifying the contribution of extrinsic moisture to low-rank coal self-heating under various intrinsic conditions. A modified R70 experimental setup is designed to introduce moist oxygen and evaluate the impact of extrinsic moisture on coal self-heating; it controls the temperature and humidity of the inlet oxygen while monitoring the relative humidity of the outlet gas in real time. A comparison between the inlet oxygen's enthalpy and the coal oxidation heat shows that only a portion of the moist oxygen's enthalpy is consumed to sustain the coal's self-heating. Next, the wetting heat is identified as the enthalpy difference between the inlet and outlet gas, and is quantified by applying the photogrammetry technique to the inlet gas's specific humidity together with the real-time measurement of the outlet gas's relative humidity. Following these systematic tests, the quantification of wetting heat is comprehensively investigated by testing different low-rank coal samples under varying air temperatures and humidity levels. Furthermore, the combined effect of extrinsic moisture and coal particle size is determined by testing different types of low-rank coals with different particle size distributions. Finally, a mathematical model based on the law of energy conservation is developed to simulate the self-heating curve of low-rank coal under various air humidity and temperature levels
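The closing energy-conservation model can be sketched, in heavily simplified lumped form, as m*c*dT/dt = Q_ox(T) + Q_wet - U*A*(T - T_amb) and integrated explicitly. Every constant below is an illustrative placeholder, not a value fitted to the dissertation's experiments:

```python
import math

# Lumped energy balance (all constants are illustrative placeholders):
#   m*c * dT/dt = Q_ox(T) + Q_wet - U*A*(T - T_amb)
M_C   = 2.0e3    # sample heat capacity m*c, J/K
U_A   = 0.8      # loss coefficient times surface area, W/K
T_AMB = 300.0    # ambient temperature, K
Q_WET = 1.5      # wetting heat supplied by the moist inlet gas, W

def q_ox(temp):
    """Arrhenius-type oxidation heat release in W (illustrative constants)."""
    return 5.0e4 * math.exp(-5000.0 / temp)

def simulate(hours, dt=60.0):
    """Explicit Euler integration of the energy balance; returns final T in K."""
    temp = T_AMB
    for _ in range(int(hours * 3600 / dt)):
        dT = (q_ox(temp) + Q_WET - U_A * (temp - T_AMB)) / M_C
        temp += dT * dt
    return temp
```

With these placeholder values the wetting term raises the sample a couple of kelvin above ambient; the point of the model is that whether the Arrhenius oxidation term then runs away depends on how much of the moist gas's enthalpy the coal actually absorbs.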

    Conceptualising and estimating rationalised agricultural optimisation models

    Computational modelling for quantitative agricultural policy assessment in the EU has employed more farm-level-oriented approaches in recent years. This follows policy instruments that increasingly target the farm level and have effects that vary with farm characteristics. At the same time, methodological advances such as Positive Mathematical Programming (PMP) have increased the acceptance of farm-level modelling for policy analysis. By introducing non-linear terms into the objective function of programming models, PMP offers an elegant calibration property and a smooth simulation response. This thesis addresses the lack of economic rationalisation of PMP and the econometric estimation of alternative model formulations. First, the dissertation analyses the economic rationality of the most often used quadratic PMP model. One potential rationalisation of the non-linear terms in the objective function discussed in the literature is a non-linear capacity constraint (CC) representing some aggregate of labour and capital stock. Results show that the equivalence between a quadratic CC formulation and a PMP model is limited to the calibration property of the programming model. In terms of simulation behaviour and estimation, the two models differ. Therefore, a quadratic capacity constraint cannot fully rationalise a quadratic PMP model. Nevertheless, it could effectively connect supply models to market models in order to exchange information on primary factors. Second, the thesis examines the consistency of Econometric Mathematical Programming (EMP) models, which allow the parameters of non-linear technologies to be estimated from multiple observations using first-order conditions as estimating equations. The chosen EMP model is a single-farm optimisation model with Constant Elasticity of Substitution production functions. A Monte Carlo setup evaluates the consistency of the estimation procedure under different error structures.
Results show that the estimated parameters converge to the true values as the sample size increases. Finally, the dissertation addresses the lack of statistical inference procedures for EMP models in the literature. Bootstrapped confidence intervals are suggested and evaluated with respect to the accuracy of their coverage probabilities, again using a Monte Carlo approach. The simulated confidence intervals generally approximate the correct coverage probabilities with sufficient accuracy, although results differ somewhat by sampling approach and by the choice of confidence interval calculation.
Keywords: positive mathematical programming, capacity constraint, econometric mathematical programming model, errors in optimisation, bootstrapped confidence intervals
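The quadratic PMP model at the centre of the thesis can be illustrated with the standard two-stage calibration: duals on the calibration bounds from a first-stage LP determine the quadratic cost slopes, after which the non-linear model's first-order conditions reproduce the observed activity levels exactly. A sketch with invented margins and areas (not data from the thesis):

```python
# Hypothetical farm data: gross margins (EUR/ha) and observed areas (ha).
margins = {"wheat": 500.0, "barley": 420.0, "rape": 400.0}
x_obs   = {"wheat": 40.0,  "barley": 35.0,  "rape": 25.0}
land    = sum(x_obs.values())

# Stage 1 (implicit here): with calibration bounds x <= x_obs, the land dual
# equals the margin of the marginal crop; bound duals are the margin gaps.
marginal = min(margins, key=margins.get)
mu  = margins[marginal]
lam = {c: margins[c] - mu for c in margins}

# Stage 2: quadratic cost slopes gamma_c = lambda_c / x_obs_c calibrate
#   max sum_c margins[c]*x[c] - gamma[c]*x[c]**2 / 2   s.t. sum_c x[c] = land.
gamma = {c: lam[c] / x_obs[c] for c in margins}

# FOC at the land dual mu: margins[c] - gamma[c]*x[c] = mu when gamma[c] > 0;
# the marginal crop absorbs the residual land.
x_cal = {c: (margins[c] - mu) / gamma[c] for c in margins if gamma[c] > 0}
x_cal[marginal] = land - sum(x_cal.values())
```

The exact reproduction of x_obs is the calibration property the thesis refers to; its point is that this property alone does not pin down simulation behaviour, which is where the quadratic capacity-constraint rationalisation breaks down.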

    Localized Mobility Management for SDN-Integrated LTE Backhaul Networks

    Small cell (SCell) and Software Define Network (SDN) are two key enablers to meet the evolutional requirements of future telecommunication networks, but still on the initial study stage with lots of challenges faced. In this paper, the problem of mobility management in SDN-integrated LTE (Long Term Evolution) mobile backhaul network is investigated. An 802.1ad double tagging scheme is designed for traffic forwarding between Serving Gateway (S-GW) and SCell with QoS (Quality of Service) differentiation support. In addition, a dynamic localized forwarding scheme is proposed for packet delivery of the ongoing traffic session to facilitate the mobility of UE within a dense SCell network. With this proposal, the data packets of an ongoing session can be forwarded from the source SCell to the target SCell instead of switching the whole forwarding path, which can drastically save the path-switch signalling cost in this SDN network. Numerical results show that compared with traditional path switch policy, more than 50 signalling cost can be reduced, even considering the impact on the forwarding path deletion when session ceases. The performance of data delivery is also analysed, which demonstrates the introduced extra delivery cost is acceptable and even negligible in case of short forwarding chain or large backhaul latency
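The 802.1ad scheme stacks an outer service tag (TPID 0x88A8) on top of the customer tag (TPID 0x8100). A plausible mapping, assumed here for illustration rather than taken from the paper, is that the S-VID identifies the SCell while the PCP bits of the tag carry the QoS class. A minimal frame-header sketch:

```python
import struct

def qinq_header(dst, src, s_vid, c_vid, s_pcp=0, c_pcp=0, ethertype=0x0800):
    """Build an 802.1ad (QinQ) Ethernet header: outer S-tag (TPID 0x88A8),
    inner C-tag (TPID 0x8100); each TCI packs PCP (3 bits), DEI (1), VID (12)."""
    def tci(pcp, vid):
        return (pcp << 13) | (vid & 0x0FFF)
    return (dst + src
            + struct.pack("!HH", 0x88A8, tci(s_pcp, s_vid))
            + struct.pack("!HH", 0x8100, tci(c_pcp, c_vid))
            + struct.pack("!H", ethertype))

# Hypothetical example: SCell 100, session VLAN 42, high-priority QoS class 5.
hdr = qinq_header(b"\x02" * 6, b"\x04" * 6, s_vid=100, c_vid=42, s_pcp=5)
```

Because forwarding rules can then match on the outer tag alone, redirecting an ongoing session to a target SCell only requires rewriting the S-tag at one switch, rather than re-signalling the whole path.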